

Echoes of AI Harms: A Human-LLM Synergistic Framework for Bias-Driven Harm Anticipation

Tantalaki, Nicoleta, Vei, Sophia, Vakali, Athena

arXiv.org Artificial Intelligence

The growing influence of Artificial Intelligence (AI) systems on decision-making in critical domains has exposed their potential to cause significant harms, often rooted in biases embedded across the AI lifecycle. While existing frameworks and taxonomies document bias or harms in isolation, they rarely establish systematic links between specific bias types and the harms they cause, particularly within real-world sociotechnical contexts. Technical fixes proposed for AI bias are ill-equipped to address these harms and are typically applied after a system has been developed or deployed, offering limited preventive value. We propose ECHO, a novel framework for proactive AI harm anticipation through the systematic mapping of AI bias types to harm outcomes across diverse stakeholder and domain contexts. ECHO follows a modular workflow encompassing stakeholder identification, vignette-based presentation of biased AI systems, and dual (human-LLM) harm annotation, integrated within ethical matrices for structured interpretation. This human-centered approach enables early-stage detection of bias-to-harm pathways, guiding AI design and governance decisions from the outset. We validate ECHO in two high-stakes domains (disease diagnosis and hiring), revealing domain-specific bias-to-harm patterns and demonstrating ECHO's potential to support anticipatory governance of AI systems.
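
As a rough illustration of how such a workflow might fit together, here is a minimal Python sketch of an ECHO-style pipeline; the class and function names, label sets, and disagreement-handling step are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of an ECHO-style pipeline: vignette presentation,
    # dual human-LLM harm annotation, and an ethical matrix for interpretation.
    # All names and structures are illustrative assumptions, not the paper's code.
    from dataclasses import dataclass, field

    @dataclass
    class Vignette:
        domain: str      # e.g. "hiring" or "disease diagnosis"
        bias_type: str   # e.g. "representation bias"
        scenario: str    # short narrative of the biased AI system in use

    @dataclass
    class EthicalMatrix:
        # rows: stakeholders, columns: bias types, cells: annotated harms
        cells: dict = field(default_factory=dict)

        def record(self, stakeholder: str, bias_type: str, harms: set):
            self.cells.setdefault((stakeholder, bias_type), set()).update(harms)

    def annotate_harms(vignette: Vignette, stakeholder: str,
                       human_labels: set, llm_labels: set) -> set:
        # Dual annotation: keep the union of labels, flag disagreements for review.
        disagreements = human_labels ^ llm_labels
        if disagreements:
            print(f"review needed for {stakeholder}: {disagreements}")
        return human_labels | llm_labels

    # Toy usage: one hiring vignette, one stakeholder group.
    matrix = EthicalMatrix()
    v = Vignette("hiring", "representation bias",
                 "A resume screener trained on past hires ranks women lower.")
    harms = annotate_harms(v, "job applicants",
                           human_labels={"opportunity loss", "stigmatization"},
                           llm_labels={"opportunity loss", "economic harm"})
    matrix.record("job applicants", v.bias_type, harms)
    print(matrix.cells)

The union-plus-flagging rule is just one plausible way to reconcile the two annotators; the paper's actual reconciliation and matrix structure may differ.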


Giving AI Agents Access to Cryptocurrency and Smart Contracts Creates New Vectors of AI Harm

Marino, Bill, Juels, Ari

arXiv.org Artificial Intelligence

There is growing interest in giving AI agents access to cryptocurrencies as well as to the smart contracts that transact them. But doing so, this position paper argues, could lead to formidable new vectors of AI harm. To support this argument, we first examine the unique properties of cryptocurrencies and smart contracts that could give rise to these new vectors of harm. Next, we describe each of these new vectors of harm in detail. Finally, we conclude with a call for more technical research aimed at preventing and mitigating these harms, thereby making it safer to endow AI agents with cryptocurrencies and smart contracts.


People shouldn't pay such a high price for calling out AI harms

MIT Technology Review

The G7 has just agreed on a (voluntary) code of conduct that AI companies should abide by, as governments seek to minimize the harms and risks created by AI systems. And later this week, the UK will be full of AI movers and shakers attending the government's AI Safety Summit, an effort to come up with global rules on AI safety. In all, these events suggest that the narrative pushed by Silicon Valley about the "existential risk" posed by AI seems to be increasingly dominant in public discourse. This is concerning, because focusing on fixing hypothetical harms that may emerge in the future diverts attention from the very real harms AI is causing today. "Existing AI systems that cause demonstrated harms are more dangerous than hypothetical 'sentient' AI systems because they are real," writes Joy Buolamwini, a renowned AI researcher and activist, in her new memoir Unmasking AI: My Mission to Protect What Is Human in a World of Machines.


We need to focus on the AI harms that already exist

MIT Technology Review

One problem with minimizing existing AI harms by saying hypothetical existential harms are more important is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity. I am not opposed to preventing the creation of fatal AI systems. Governments concerned with lethal use of AI can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.


The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements

Feffer, Michael, Martelaro, Nikolas, Heidari, Hoda

arXiv.org Artificial Intelligence

Prior work has established the importance of integrating AI ethics topics into computer and data science curricula. We provide evidence suggesting that one of the critical objectives of AI ethics education must be to raise awareness of AI harms. While there are various sources to learn about such harms, the AI Incident Database (AIID) is one of the few attempts at offering a relatively comprehensive database indexing prior instances of harms or near-harms stemming from the deployment of AI technologies in the real world. This study assesses the effectiveness of AIID as an educational tool to raise awareness regarding the prevalence and severity of AI harms in socially high-stakes domains. We present findings obtained through a classroom study conducted at an R1 institution as part of a course focused on the societal and ethical considerations around AI and ML. Our qualitative findings characterize students' initial perceptions of core topics in AI ethics and their desire to close the educational gap between their technical skills and their ability to think systematically about the ethical and societal aspects of their work. We find that interacting with the database helps students better understand the magnitude and severity of AI harms and instills in them a sense of urgency around (a) designing functional and safe AI and (b) strengthening governance and accountability mechanisms. Finally, we compile students' feedback about the tool and our class activity into actionable recommendations for the database development team and the broader community to improve awareness of AI harms in AI ethics education.


Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms

QueerInAI, Organizers of, Dennler, Nathan, Ovalle, Anaelia, Singh, Ashwin, Soldaini, Luca, Subramonian, Arjun, Tu, Huy, Agnew, William, Ghosh, Avijit, Yee, Kyra, Peradejordi, Irene Font, Talat, Zeerak, Russo, Mayra, Pinhal, Jess de Jesus de Pinho

arXiv.org Artificial Intelligence

Bias evaluation benchmarks, along with dataset and model documentation, have emerged as central processes for assessing the biases and harms of artificial intelligence (AI) systems. However, these auditing processes have been criticized for their failure to integrate the knowledge of marginalized communities and to consider the power dynamics between auditors and the communities. Consequently, modes of bias evaluation have been proposed that engage impacted communities in identifying and assessing the harms of AI systems (e.g., bias bounties). Even so, asking what marginalized communities want from such auditing processes has been neglected. In this paper, we ask queer communities for their positions on, and desires from, auditing processes. To this end, we organized a participatory workshop to critique and redesign bias bounties from queer perspectives. We found that when given space, the scope of feedback from workshop participants goes far beyond what bias bounties afford, with participants questioning the ownership, incentives, and efficacy of bounties. We conclude by advocating for community ownership of bounties and for complementing bounties with participatory processes (e.g., co-creation).


Generative AI risks concentrating Big Tech's power. Here's how to stop it.

MIT Technology Review

Both of these resources are only really available to big companies. And although some of the most exciting applications, such as OpenAI's chatbot ChatGPT and Stability.AI's image-generation AI Stable Diffusion, are created by startups, they rely on deals with Big Tech that give them access to its vast data and computing resources. "A couple of big tech firms are poised to consolidate power through AI rather than democratize it," says Sarah Myers West, managing director of the AI Now Institute, a research nonprofit. Right now, Big Tech has a chokehold on AI. But Myers West believes we're actually at a watershed moment.


Who's going to save us from bad AI?

MIT Technology Review

That was the response from AI policy and ethics wonks to news last week that the Office of Science and Technology Policy, the White House's science and technology advisory agency, had unveiled an AI Bill of Rights. The document is Biden's vision of how the US government, technology companies, and citizens should work together to hold the AI sector accountable. The US has so far been one of the only Western nations without clear guidance on how to protect its citizens against AI harms. Tech companies say they want to mitigate these sorts of harms, but it's really hard to hold them to account. The AI Bill of Rights outlines five protections Americans should have in the AI age, including data privacy, the right to be protected from unsafe systems, and assurances that algorithms shouldn't be discriminatory and that there will always be a human alternative.


Biden proposes new "Bill of Rights" to protect Americans from AI harms

#artificialintelligence

Today, the White House proposed a "Blueprint for an AI Bill of Rights," a set of principles and practices that seek to guide "the design, use, and deployment of automated systems," with the goal of protecting the rights of Americans in "the age of artificial intelligence," according to the White House. The blueprint is a set of non-binding guidelines, or suggestions, providing a "national values statement" and a toolkit to help lawmakers and businesses build the proposed protections into policy and products. The White House crafted the blueprint, it said, after a year-long process that sought input from people across the country "on the issue of algorithmic and data-driven harms and potential remedies." The document represents a wide-ranging approach to countering potential harms in artificial intelligence. "Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public," reads the foreword of the blueprint.


AI bias is rampant. Bug bounties could help catch it.

#artificialintelligence

The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s. Back then, some companies found they could actually make themselves safer by incentivizing the work of independent "white hat" security researchers who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That's how the practice of bug bounties became a cornerstone of cybersecurity today. In a research paper unveiled Thursday, researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji and Joy Buolamwini argue that companies should once again invite their most ardent critics in, this time by putting bounties on harms that might originate in their artificial intelligence systems. François, a Fulbright scholar who has advised the French CTO and who played a key role in the U.S. Senate's probe of Russia's attempts to influence the 2016 election, published the report through the Algorithmic Justice League, which was founded in 2016 and "combines art and research to illuminate the social implications and harms of artificial intelligence."